
    An interface to retrieve personal memories using an iconic visual language

    Relevant past events can be remembered by viewing related pictures. The main difficulty is finding these photos in a large personal collection. Query definition and image annotation are key issues in overcoming this problem: the former because of the diversity of the clues our memory provides when recovering a past moment, and the latter because images need to be annotated with information about those clues in order to be retrieved. Tools for recovering past memories should therefore handle these two tasks carefully. This paper describes a user interface designed to explore pictures from personal memories. Users can query the media collection in several ways, and for this reason an iconic visual language for defining queries is proposed. Automatic and semi-automatic annotation is also performed using the image content and the audio information obtained when users show their images to others. The paper also presents an evaluation of the user interface based on tests with 58 participants.

    An explorative study of interface support for image searching

    In this paper we study interfaces for image retrieval systems. Current image retrieval interfaces are limited to providing query facilities and result presentation: the user can inspect the results and possibly provide feedback on their relevance to the current query. Our approach, in contrast, encourages users to group and organise their search results and thus provide more fine-grained feedback for the system. It combines the search and management processes, which - according to our hypothesis - helps users to conceptualise their search tasks and to overcome the query formulation problem. An evaluation involving young design professionals and different types of information-seeking scenarios shows that the proposed approach succeeds in encouraging users to conceptualise their tasks and that it leads to increased user satisfaction. However, it could not be shown to increase performance. We identify the problems in the current setup which, when eliminated, should lead to more effective searching overall.

    Prosemantic features for content-based image retrieval

    Full text link
    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-18449-9_8 (Revised Selected Papers of the 7th International Workshop, AMR 2009, Madrid, Spain, September 24-25, 2009).
    We present an image description approach based on prosemantic features. Images are represented by a set of low-level features related to their structure and color distribution. These descriptions are fed to a battery of image classifiers trained to evaluate the membership of each image in a set of 14 overlapping classes; the prosemantic features are obtained by packing the resulting scores together. To verify the effectiveness of the approach, we designed a target search experiment in which both low-level and prosemantic features are embedded into a content-based image retrieval system exploiting relevance feedback. The experiments show that the use of prosemantic features allows for quicker and more successful retrieval of the query images.
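    The score-packing step the abstract describes (low-level features fed to a battery of class-membership classifiers, with the scores concatenated into one descriptor) can be sketched roughly as follows. This is a minimal illustration under assumed inputs: the toy linear "classifiers" and their weights are hypothetical placeholders, not the authors' trained models.

```python
import numpy as np

def prosemantic_vector(low_level_features, classifiers):
    """Pack per-class membership scores into one 'prosemantic' descriptor.

    low_level_features: 1-D sequence of structure/color features.
    classifiers: callables mapping a feature vector to a scalar
    membership score (the paper trains 14 such classifiers, one
    per overlapping semantic class).
    """
    x = np.asarray(low_level_features, dtype=float)
    # One score per classifier, concatenated in a fixed order.
    return np.array([clf(x) for clf in classifiers])

# Toy battery of 3 linear scorers with made-up weights.
weights = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.5, 0.5])]
battery = [lambda x, w=w: float(w @ x) for w in weights]
```

    The resulting vector can then replace (or complement) the raw low-level features as the image representation inside a retrieval system with relevance feedback.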

    Enhancing Video Recommendation Using Multimedia Content

    Video recordings are complex media types. When we watch a movie, we effortlessly register many details conveyed to us (by the author) through different multimedia channels, in particular the audio and visual modalities. To date, the majority of movie recommender systems use collaborative filtering (CF) models or content-based filtering (CBF) relying on metadata (e.g., editorial metadata such as genre, or wisdom-of-the-crowd metadata such as user-generated tags) at their core, since such metadata is human-generated and assumed to cover the 'content semantics' of movies to a great degree. The information obtained from multimedia content and learning from multi-modal sources (e.g., audio, visual, and metadata), on the other hand, offer the possibility of uncovering relationships between modalities and obtaining an in-depth understanding of the natural phenomena occurring in a video. These discerning characteristics of heterogeneous feature sets meet users' differing information needs. In the context of this Ph.D. thesis [9], briefly summarized in the current extended abstract, approaches to the automated extraction of multimedia information from videos and their integration with video recommender systems have been elaborated, implemented, and analyzed. A variety of tasks related to movie recommendation using multimedia content have been studied. The results of this thesis suggest that recommender system research can benefit from the knowledge in multimedia signal processing and machine learning established over the last decades for solving various recommendation tasks.
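    A common way to combine such heterogeneous sources in a content-based recommender is late fusion: compute an item-to-item similarity per modality (audio, visual, metadata) and blend the scores with modality weights. The function below is a minimal sketch of that general idea under assumed inputs; it is not the specific method developed in the thesis.

```python
def fused_similarity(sim_by_modality, weights):
    """Weighted late fusion of per-modality item similarities.

    sim_by_modality: dict mapping modality name -> similarity in [0, 1]
    weights: dict mapping the same modality names -> non-negative weight
    """
    total = sum(weights.values())
    if total == 0:
        raise ValueError("at least one modality weight must be positive")
    # Normalised weighted sum keeps the fused score in [0, 1].
    return sum(weights[m] * sim_by_modality[m] for m in weights) / total
```

    Candidate videos can then be ranked for a user by their fused similarity to videos the user liked; the weights could in principle be tuned per task or per user.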

    Investigating Guided Extensive Reading And Vocabulary Knowledge Performance Among Remedial Esl Learners In A Public University In Malaysia

    Research supports extensive reading, which draws on incidental learning, as a primary tool for second/foreign-language vocabulary knowledge development.